# Open-domain recognition
| Model | Uploader | License | Task | Framework | Downloads | Likes | Description |
|---|---|---|---|---|---|---|---|
| Resnet101 Clip Gap.openai | timm | Apache-2.0 | Image Classification | Transformers | 104 | 0 | ResNet101 image encoder from the CLIP framework; extracts image features via Global Average Pooling (GAP) |
| Resnet50 Clip Gap.openai | timm | Apache-2.0 | Image Classification | Transformers | 250 | 1 | ResNet50 variant of the CLIP visual encoder; extracts image features via Global Average Pooling (GAP) |
| Eva Giant Patch14 Clip 224.laion400m | timm | MIT | Text-to-Image | | 124 | 0 | EVA CLIP vision-language model built on OpenCLIP and the timm framework; supports zero-shot image classification |
| Eva02 Enormous Patch14 Clip 224.laion2b | timm | MIT | Text-to-Image | | 38 | 0 | EVA-CLIP vision-language model based on the CLIP architecture; supports zero-shot image classification |
| Eva02 Base Patch16 Clip 224.merged2b | timm | MIT | Text-to-Image | | 3,029 | 0 | EVA CLIP vision-language model built on the OpenCLIP and timm frameworks; supports zero-shot image classification |
| Vit Huge Patch14 Clip 224.laion2b | timm | Apache-2.0 | Image Classification | Transformers | 1,969 | 0 | ViT-Huge visual encoder from the CLIP framework, trained on the LAION-2B dataset; supports image feature extraction |
| Vit Base Patch32 Clip 224.laion2b | timm | Apache-2.0 | Image Classification | Transformers | 83 | 0 | Vision Transformer based on the CLIP architecture, trained on the LAION-2B dataset; designed for image feature extraction |
| Vit Huge Patch14 Clip 224.metaclip 2pt5b | timm | | Image Classification | | 3,173 | 0 | Dual-purpose vision-language model trained on the MetaCLIP-2.5B dataset; supports zero-shot image classification |
| Vit Large Patch14 Clip 224.metaclip 2pt5b | timm | | Image Classification | | 2,648 | 0 | Dual-framework-compatible vision model trained on the MetaCLIP-2.5B dataset; supports zero-shot image classification |
| Resnet50x16 Clip.openai | timm | MIT | Image Classification | | 702 | 0 | ResNet50x16 visual model from the CLIP framework; supports zero-shot image classification |
| Resnet50 Clip.openai | timm | MIT | Image Classification | | 11.91k | 0 | Zero-shot image classification model based on the ResNet50 architecture and CLIP |
| Vit Xsmall Patch16 Clip 224.tinyclip Yfcc15m | timm | MIT | Image Classification | | 444 | 0 | Compact vision-language model based on the CLIP architecture, designed for efficient zero-shot image classification |
| Vit Betwixt Patch32 Clip 224.tinyclip Laion400m | timm | MIT | Image Classification | | 113 | 1 | Small ViT-based CLIP model trained on the LAION-400M dataset; suitable for zero-shot image classification |
| Vit Medium Patch32 Clip 224.tinyclip Laion400m | timm | MIT | Image Classification | | 110 | 0 | Vision-language model based on the OpenCLIP library; supports zero-shot image classification |
| Vit B 16 Aion400m E32 1finetuned 1 | Albe-njupt | MIT | Image Classification | | 18 | 1 | Vision Transformer model based on the OpenCLIP framework, fine-tuned for zero-shot image classification |
| CLIP ViT B 32 Laion2b E16 | justram | MIT | Text-to-Image | | 89 | 0 | Vision-language pretrained model implemented with OpenCLIP; supports zero-shot image classification |
| CLIP ViT L 14 CommonPool.XL S13b B90k | laion | MIT | Text-to-Image | | 4,255 | 2 | Vision-language pretrained model based on the CLIP architecture; supports zero-shot image classification and cross-modal retrieval |
| CLIP ViT B 16 CommonPool.L.basic S1b B8k | laion | MIT | Text-to-Image | | 57 | 0 | Vision-language model based on the CLIP architecture; supports zero-shot image classification |
| CLIP ViT B 32 CommonPool.S.laion S13m B4k | laion | MIT | Text-to-Image | | 58 | 0 | Vision-language model based on the CLIP architecture; supports zero-shot image classification |
| CLIP ViT B 32 CommonPool.S.image S13m B4k | laion | MIT | Text-to-Image | | 60 | 0 | Vision-language model based on the CLIP architecture; supports zero-shot image classification |
| CLIP ViT B 32 CommonPool.S.text S13m B4k | laion | MIT | Text-to-Image | | 57 | 0 | Vision-language model based on the CLIP architecture; supports zero-shot image classification |
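
Several of the timm entries above are pure image encoders that return a pooled feature vector rather than class logits. Below is a minimal sketch of extracting those features, assuming the checkpoint name `resnet50_clip_gap.openai` downloads from the model hub and that `example.jpg` is an illustrative local image (neither is specified by the catalog itself).

```python
# Minimal sketch: pooled feature extraction with a timm CLIP image encoder.
# Assumptions (not from the catalog): the checkpoint "resnet50_clip_gap.openai"
# is available for download, and "example.jpg" is a local image file.
import timm
import torch
from PIL import Image

# num_classes=0 removes the classification head, so the forward pass returns
# the globally average-pooled feature vector described in the entries above.
model = timm.create_model("resnet50_clip_gap.openai", pretrained=True, num_classes=0)
model.eval()

# Build the preprocessing pipeline that matches the pretrained weights.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

image = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    features = model(transform(image).unsqueeze(0))  # shape: (1, feature_dim)
print(features.shape)
```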
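The OpenCLIP-compatible entries (the EVA, TinyCLIP, and laion CommonPool checkpoints) are typically used for zero-shot classification: encode the image and a set of label prompts, then rank labels by cosine similarity. A minimal sketch with `open_clip` follows, assuming the Hub repository id `laion/CLIP-ViT-L-14-CommonPool.XL-s13B-b90K` resolves as written; the label prompts and image path are illustrative only.

```python
# Minimal sketch: zero-shot classification with an OpenCLIP checkpoint.
# Assumptions (not from the catalog): the repo id below resolves as written,
# and the label prompts and "example.jpg" are illustrative placeholders.
import open_clip
import torch
from PIL import Image

repo = "hf-hub:laion/CLIP-ViT-L-14-CommonPool.XL-s13B-b90K"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

labels = ["a photo of a cat", "a photo of a dog", "a photo of a bicycle"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(labels)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    # Normalize, then rank labels by cosine similarity with the image.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The "a photo of a ..." wording is the usual CLIP prompt convention; richer prompt templates or ensembles generally improve zero-shot accuracy.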